Oktober / October 2025 The Monthly Magazine of the SOUTH AFRICAN VETERINARY ASSOCIATION Die Maandblad van die SUID-AFRIKAANSE VETERINÊRE VERENIGING Reproductive disorders of backyard poultry CPD THEME Ophthalmology nuus•news Access to CPD Articles: https://www.sava.co.za/vetnews-2025/ VET
Dagboek • Diary

Ongoing / Online 2025
SAVETCON: Webinars. Info: Corné Engelbrecht, SAVETCON, 071 587 2950, corne@savetcon.co.za / https://app.livestorm.co/svtsos
Acupuncture – Certified Mixed Species Course. Info: Chi University: https://chiu.edu/courses/cva#about / southafrica@tcvm.com
SAVA Johannesburg Branch CPD Events: monthly; please visit the website for more info. Venue: Johannesburg Country Club. Info: Vetlink: https://savaevents.co.za/

October 2025
Oranje Vaal CPD Day, 11 October. Venue: Afridome, Parys. Info: conference@savetcon.co.za
Northern Natal and Midlands Branch Congress, 11-12 October. Venue: Fordoun Hotel and Spa, Midlands. Info: https://vetlink.co.za/northern_natal_and_midlands/
The Middle East & Africa Veterinary Congress (MEAVC), 17-19 October: Pre- and Main Congress Workshops. Venue: Jafza One Convention Centre, Dubai. Info: www.meavc.com
SAVA Free State and Northern Cape Branch Congress, 17-18 October. Venue: Goose Hill Guest Farm, Bloemfontein. Info: conference@savetcon.co.za
KwaZulu-Natal Branch Congress, 25-26 October. Venue: San Lameer Resort, Southbroom. Info: www.vetlink.co.za
11th International Sheep Veterinary Congress, 27-31 October. Venue: Wollongong, New South Wales, Australia. Info: https://intsheepvetassoc.org/11th-isvc-2025
10th Annual South African Immunology Society (SAIS) Congress, 30 October – 01 November. Venue: Garden Court Marine Parade, Durban (KZN). Info: corne@savetcon.co.za or visit www.savetcon.co.za
Southern Cape Branch Congress, 31 October – 01 November. Venue: Oubaai Hotel Golf & Spa, George. Info: https://vetlink.co.za/southern-cape-branch/

February 2026
SAEVA Congress 2026, 19-22 February. Venue: Champagne Sports Resort, Drakensberg, KwaZulu-Natal. Info: https://vetlink.co.za/saeva-congress-2026/

March 2026
Wildlife Group of the SAVA Congress 2026, 12-14 March. Venue: 26° South Hotel, Muldersdrift, Gauteng. Info: https://vetlink.co.za/wildlife-group-2026/
NVCG: The Veterinary Masterclass: Neurology, 17-18 March, Cape Town (venue to be confirmed); 19-20 March, Johannesburg (venue to be confirmed). Info: Vetlink: www.nvcg.co.za or +27 12 346 1590

June 2026
13th International Crustacean Society Mid-Year Meeting, 01-04 June. Venue: STIAS, Stellenbosch. Info: https://tcs2026.com/
RuVASA Congress 2026, 08-10 June. Venue: Champagne Sports Resort, Drakensberg, KZN. Info: https://vetlink.co.za/ruvasa-congress-2026/

September 2026
NVCG: The Vet Masterclass: Medicine & Dermatology, 19-20 September, Johannesburg (venue to be confirmed); 21-22 September, Cape Town (venue to be confirmed). Info: Vetlink: www.nvcg.co.za or +27 12 346 1590
SAAVT Biennial Congress, 30 September – 01 October. Venue: Krystal Beach Hotel, Gordon's Bay. Info: conference@savetcon.co.za
Vetnuus | October 2025 1 Contents | Inhoud
President: Dr Ziyanda Qwalela president@sava.co.za
Interim General Manager: Ms Sonja Ludik sonja@sava.co.za / +27 (0)12 346 1150
Editor VetNews: Ms Andriette van der Merwe vetnews@sava.co.za
Accounts / Bookkeeping: Ms Shaye Hughes accounts@sava.co.za / +27 (0)12 346 1150
Reception: Ms Hanlie Swart reception@sava.co.za / +27 (0)12 346 1150
Marketing & Communications: Ms Sonja van Rooyen marketing@sava.co.za / +27 (0)12 346 1150
Membership Enquiries: Ms Debbie Breeze debbie@sava.co.za / +27 (0)12 346 1150
Vaccination Booklets: Ms Debbie Breeze debbie@sava.co.za / +27 (0)12 346 1150
South African Veterinary Foundation: Ms Debbie Breeze savf@sava.co.za / +27 (0)12 346 1150
Community Veterinary Clinics: Ms Claudia Cloete manager@savacvc.co.za / +27 (0)63 110 7559
SAVETCON: Ms Corné Engelbrecht corne@savetcon.co.za / +27 (0)71 587 2950
VetNuus is die amptelike publikasie van die Suid-Afrikaanse Veterinêre Vereniging (SAVV). Alle regte word voorbehou. Geen deel van hierdie publikasie mag aangehaal, gedupliseer, versprei of aan die publiek beskikbaar gestel word in enige vorm sonder die uitdruklike skriftelike toestemming van die SAVV nie. Hierdie publikasie is uitsluitlik bedoel vir veeartse en veearts-verwante professionele persone soos deur die Suid-Afrikaanse Veterinêre Raad erken word. Terwyl alles moontlik gedoen word om die akkuraatheid van die inhoud te verseker, aanvaar nie die redaksie, SAVV of enige van die personeel, lede, werknemers of agente enige regsaanspreeklikheid vir enige verlies, skade of benadeling, hetsy direk of indirek, wat mag spruit uit enige stelling, feit, opinie, advertensie of aanbeveling wat hierin gepubliseer is nie. Enige advertensie of verwysing na 'n spesifieke produk is toevallig en word nie noodwendig deur die SAVV onderskryf of aanbeveel nie. VetNews is the official publication of the South African Veterinary Association (SAVA). All rights are reserved.
No part of this publication may be quoted, reproduced, distributed, or made publicly available in any form or by any means without the prior express written consent of SAVA. This publication is intended solely for veterinarians and paraveterinary professionals as recognised by the South African Veterinary Council. While every effort is made to ensure the accuracy of the content, neither the editorial board, SAVA, nor any of its office bearers, members, employees, or agents shall be held liable for any loss, damage, or prejudice, whether direct or consequential, arising from any statement, fact, opinion, advertisement, or recommendation published herein. The inclusion of advertising or reference to specific products or services does not imply endorsement by SAVA. STREET ADDRESS 47 Gemsbok Ave, Monument Park, Pretoria, 0181, South Africa POSTAL ADDRESS P O Box 25033, Monument Park Pretoria, 0105, South Africa TELEPHONE +27 (0)12 346-1150 FAX General: +27 (0) 86 683 1839 Accounts: +27 (0) 86 509 2015 WEB www.sava.co.za CHANGE OF ADDRESS Please notify the SAVA by email: debbie@sava.co.za or letter: SAVA, P O Box 25033, Monument Park, Pretoria, 0105, South Africa CLASSIFIED ADVERTISEMENTS (Text to a maximum of 80 words) Sonja van Rooyen assistant@sava.co.za +27 (0)12 346 1150 DISPLAY ADVERTISEMENTS Sonja van Rooyen assistant@sava.co.za +27 (0)12 346 1150 DESIGN AND LAYOUT Sonja van Rooyen PRINTED BY Business Print: +27 (0)12 843 7638 VET Diary / Dagboek II Dagboek • Diary Regulars / Gereeld 2 From the President 4 Editor’s notes / Redakteurs notas Articles / Artikels 8 Application of Computer Vision Methods in Veterinary Ophthalmology 32 Blindfolding Blind? 
Association / Vereniging 34 SAVETCON News 35 SAVA News 38 In Memoriam Vet's Health / Gesondheid 41 Life Coaching Technical / Tegnies 42 Ophthalmology Column 44 Royal Canin Column Marketplace / Markplein 46 Marketplace Jobs / Poste 47 Jobs / Poste 48 Classifieds / Snuffeladvertensies Click on the image to access Vetnews CPD articles nuus•news 24-Hour Toll-Free Helpline: 0800 21 21 21
What a whirlwind month September has been! I would like to congratulate and thank all our colleagues who freely gave of their time and skills to support the vaccination, awareness, and sterilisation campaigns conducted during this period. Your generosity does not go unnoticed, and it reflects the profession’s enduring commitment to animal health, One Health, and the public good. The constitution of the South African Veterinary Council (SAVC) continues to dominate conversations within the fraternity. At their meetings held on 26 and 27 September respectively, the SAVA Board of Directors and Federal Council considered the ongoing impasse regarding the constitution of the SAVC, the notice of motion received as part of the legal proceedings between the SAVC and the Minister of Agriculture, as well as various communications from members and Special Interest Groups. Both meetings resolved to create a poll to allow members to express their preferences regarding the matter. Several possible approaches have been identified to address the situation. These options are not mutually exclusive, and any course of action will require cooperation among the parties concerned. This poll is intended solely to gauge the preferences of SAVA members. While the aggregated results will be shared with the relevant parties where applicable, the preferences expressed will not necessarily determine or influence the outcome of the matter. It was also clarified at FEDCO that, despite some written communication, the parties have not yet formally met to resolve the issue. Both meetings emphasised that the desired end state is the constitution of a credible veterinary council within the legal framework as soon as possible. The President has accordingly been tasked to engage the National Department of Agriculture and the Ministry to create a space for dialogue and the resolution of the impasse.
In light of these discussions, the meetings also reflected on how we as colleagues use SAVA platforms to engage on important matters. While these platforms cannot be regulated completely, there will always be a need for a degree of self-awareness and discernment about what and how we post. We are all passionate about the profession, and that passion is a strength. At the same time, let us be mindful of how we express ourselves: engaging in good faith, with integrity, and with mutual respect. This approach ensures that robust debates remain valuable learning opportunities for all and strengthen, rather than fracture, our professional community. This consciousness is amplified when it comes to senior members of the profession as well as portfolio holders of the organisation. On behalf of the Board and Federal Council, I would like to extend our sincere appreciation to our outgoing SAVA representative on the SAVC, Dr Brendan Tindall. Your dedication, professionalism, and tireless advocacy for the interests of the profession have been invaluable. We are deeply grateful for the time, expertise, and commitment you have given to uphold high standards of governance and ethical practice, and we wish you well in your future endeavours. We also wish to recognise the contribution of our incoming SAVA representative on the SAVC, Dr Leon De Bruyn. Stepping into this role has been something of a baptism of fire, yet you have already contributed significantly to creating balance and perspective in the ongoing debates around the SAVC. Your willingness to engage constructively at such a critical time is commendable, and we look forward to your continued leadership and insight as we navigate these challenges together. Similarly, our new Interim General Manager, Sonja Ludik, has also experienced something of a baptism of fire and is already thriving in her role.
We warmly welcome you, Sonja, and trust that you will find the challenges both stimulating and rewarding, and that the experience gained will prove invaluable in your professional growth. Another issue discussed was the live export by sea of animals destined for slaughter. Both meetings agreed that a similar poll of members should be conducted on this subject. The SAVA Animal Ethics and Welfare Group, with the assistance of Professor Gareth Bath, who has contributed significantly in this area, has been tasked with drafting an appropriate statement for members to consider via the poll. This issue is complex and calls on veterinarians to provide leadership grounded in ethical and scientific principles and practices, a clear understanding of the applicable legal framework, and awareness of the socio-economic impacts. It was highlighted at both meetings that SAVA can advise and provide guidance, but it is not responsible for regulatory functions and cannot dictate the implementation of legal instruments in this matter. Coming up this month is the much-anticipated WVA VPP Workshop, to be held in Pretoria. The workshop will address concerns regarding the implementation of an FAO project assisting VPPs to open their own practices in three African countries, and will also explore the possibility of similar projects for veterinarians. Nominations for attendance have been requested from the NVCG, RuVASA, and the branches where the project was piloted, namely the North West, Eastern Cape, and Karoo. The Strategy Development Committee, working closely with FINCO, has continued to focus on safeguarding SAVA’s financial viability. This has proved to be an uphill task given delayed HWSETA payments, the slow realisation of sponsorships, and outstanding membership fees. The key challenge remains the need for SAVA to operate strictly within its means, necessitating a leaner and more efficient administration.
Despite this, SAVA has managed to reduce its operational costs significantly, a phenomenal performance and commendable achievement by all involved. The issue of outstanding membership fees remains a critical concern. Branches and groups are therefore urged to actively encourage their members to regularise their subscriptions, as this revenue is essential to sustaining the work of the Association and ensuring its long-term effectiveness. Thank you once again to all members who continue to give of their time, energy, and expertise. These collective efforts strengthen our profession and ensure that SAVA remains a credible voice for veterinarians. Dear members, let us keep the pace and remain impactful! Groetnis! Ziyanda, President of the South African Veterinary Association
co.mpanion: Building better practice together. co.mpanion is not a corporate body; it is a professionally owned and led veterinary model that is right for you if you are looking for a better way to exit from or sell your practice, or you want to become a shareholder. A co.llaborative model that gives you the ownership, support and autonomy. To find out more: www.companion.partners (WhatsApp | View Video | Download Value Proposition). Image: Dr Werner Odendaal, Shareholder & Team Member
What is the eye? The eye is a fluid-filled bulb that allows us to observe by sight. A miracle in itself. At exactly the right pressure, it allows us to see. It is the complex sensory organ responsible for sight, working by focusing light onto the light-sensitive retina at the back of the eye. The retina converts this light into electrical signals that the optic nerve transmits to the brain, creating the image we perceive. The eye’s structures, including the protective outer layers, the vascular middle layer, and the light-sensitive retina, work together with fluids like the aqueous and vitreous humour, and the focusing lens, to achieve vision. I watched a video of a lens replacement once, and I was mesmerised by the intricacy of eye surgery. To be able to do surgery on such a sensitive organ is just mind-blowing. My one son-in-law is stepping into the world of ophthalmology, and I am always stunned by the effect of cataract removal or other corrective surgeries. A person goes from not being able to see, or seeing with difficulty, to being able to see clearly. Something a lot of us take for granted. With World Sight Day this month, I want to encourage you all to look after your own eyes too. Your eyes give you the ability to do your daily job and to enjoy the wonderful world around you. Do not neglect this one sense that brings so much joy. Last month, we celebrated World Rabies Day, and I hope everybody has recovered from the myriad of activities that took place. In Hoedspruit, we had the sad case of a dog that missed an earlier rabies campaign and was subsequently diagnosed with rabies. A stark reminder that the disease is still out there: every unvaccinated animal is at risk, and so are the people around it. Vetnews thanks everyone for every rabies shot given and every person educated in the importance of rabies control. We still strive for zero rabies by 2030.
We think of the hundreds of hectares of veld that were destroyed in Etosha and on the surrounding farms. It is devastating after the good rains they had last summer. The accompanying map, a satellite image from 26 September 2025, indicates fires across Southern Africa. A scary picture indeed, but a good reflection of the seasonal fires set by subsistence farmers preparing for the new planting season. We pray for good rain in all the affected areas. May your October be a colourful array of blessings. Andriette, Editor (Editor’s notes / Redakteursnotas)
Long-term support for their skin — and your treatment plan. © Hill’s Pet Nutrition, Inc. 2024 V32800; V32802; V35148 Clinically proven nutrition for both food and environmental allergies supported by 4 clinical studies Nutrition formulated to support the skin barrier against environmental allergies – year round itching eased joy retrieved SCIENCE DID THAT. Derm Complete Adult is also available in a Mini kibble for small dogs Derm/Helicopter/VetNews
National Veterinary Clinicians Group of the SAVA: Advancing Veterinary Practice for Small Animal Vets. Save the date and stay ahead with cutting-edge expertise. More information: +27 12 346 1590 / www.nvcg.co.za / visit our website: https://www.nvcg.co.za/

THE VETERINARY MASTERCLASS 2026 | MARCH: NEUROLOGY
17-18 March 2026, Cape Town; 19-20 March 2026, Johannesburg
Speaker: Steven De Decker, DVM PhD DipECVN MVetMed PGCertVetEd FHEA MRCVS. Steven De Decker graduated from Ghent University in Belgium. After graduation, he performed a rotating internship there and undertook a PhD studying ‘wobbler syndrome’ in dogs. This was followed by a Residency in Neurology and Neurosurgery at the Royal Veterinary College. He is Senior Lecturer and the Head of Service of the neurology and neurosurgery team at the Royal Veterinary College. Although he is interested in all aspects of veterinary neurology, most of his research and publications focus on spinal disorders and neurosurgery.

THE VETERINARY MASTERCLASS 2026 | SEPTEMBER: MEDICINE & DERMATOLOGY
19-20 September 2026, Johannesburg; 21-22 September 2026, Cape Town
Speakers: Prof Katrin Hartmann, Prof. Dr. med. vet., Dr. habil., Dipl. ECVIM-CA (Internal Medicine); Prof. Dr. med. vet. Ralf S. Mueller, DipACVD, FANZCVSc (Dermatology), DipECVD
“The South African Veterinary Association aims to serve its members and to further the status and image of the veterinarian. We are committed to upholding the highest professional and scientific standards by utilising the professional knowledge, skill and resources of our members, to foster close ties with the community and thus promote the health and welfare of animals and mankind”. MISSION STATEMENT Servicing and enhancing the veterinary community since 1920! Tel: 012 346 1150 E-mail: vethouse@sava.co.za www.sava.co.za
Application of Computer Vision Methods in Veterinary Ophthalmology
An excerpt from a DOCTORAL THESIS by Matija Burić of the UNIVERSITY OF RIJEKA FACULTY OF INFORMATICS AND DIGITAL TECHNOLOGIES
ABSTRACT
This dissertation applies computer vision techniques, leveraging deep learning models such as U-Net and GPT-4o, to improve the diagnosis of canine eye diseases in veterinary ophthalmology. The DogEyeSeg4 dataset of real-world clinical images serves as the foundation for training. Synthetic images augmented the dataset to enhance model robustness and generalisation. U-Net(RSD), trained on DogEyeSeg4 and synthetic images generated using Stable Diffusion, was used for precise segmentation of canine eye symptoms such as corneal cloudiness, scleral redness, excessive tearing, and coloured mass protrusion in the eye corner. The study also trained individual binary segmentation models for each symptom, utilising heatmaps from SSD eye detection to reduce false positives. Although these binary models improved symptom isolation, they faced challenges with overlapping conditions and increased complexity. Ultimately, the multiclass U-Net(RSD) model provided better overall performance and efficiency. GPT-4o interpreted the segmented images, outperforming other Large Language Models (LLMs) in generating accurate diagnostic suggestions, particularly when using segmentation masks from the adjusted U-Net with a ResNet backbone alongside the original images. Despite promising results, challenges remain in diagnosing complex or subtle conditions like corneal ulcers. Future work includes expanding the dataset and symptom range, improving model architectures, and integrating multimodal data for more holistic diagnostics. These findings underscore the potential for AI-driven tools to revolutionise veterinary ophthalmology, offering more accurate and efficient diagnostic processes that can ultimately improve animal care.
Leading Article
1. INTRODUCTION
1.1. Problem and Research Subject with Hypothesis
The field of veterinary ophthalmology faces significant challenges in diagnosing and managing ocular conditions in canines, primarily due to the limited availability of advanced diagnostic tools. While human medicine has seen remarkable progress with the integration of computer vision techniques, especially in the domain of ophthalmology, these technological advancements have not been fully leveraged in veterinary applications. The disparity between the diagnostic capabilities available in human and veterinary medicine highlights a critical gap that this research aims to address. The core problem of this dissertation is to bridge this gap by developing and applying computer vision methods specifically tailored for canine eye disease detection and diagnosis. The subject of this research centres on the innovative application of deep learning models, particularly those designed for image segmentation, to enhance the accuracy and efficiency of diagnosing ocular conditions in dogs. The research explores the potential of adapting and optimising these models, which have proven effective in human medical imaging, to meet the unique challenges posed by veterinary ophthalmology. This includes dealing with diverse imaging conditions, varying image quality, and the need for precise segmentation of specific ocular features. To guide this research, the following hypotheses have been formulated:
• A computer vision model can recognise certain canine ocular conditions in still images taken in an unconstrained environment.
• Modification of the input and architecture of the U-Net network contributes to a better segmentation of canine eye conditions.
The first hypothesis addresses the core objective of demonstrating that a computer vision model, when applied to images captured in non-ideal, real-world settings, can accurately identify specific ocular conditions in canines.
This aspect of the research is crucial because, unlike controlled laboratory environments, real-world veterinary practices often involve images that vary significantly in quality due to factors such as lighting, angle, and the behaviour of the animals during imaging. The second hypothesis relates to the technical enhancement of the model itself, positing that by modifying both the input features and the architectural elements of the U-Net network, the model’s performance in segmenting canine eye conditions can be improved. This hypothesis suggests that through careful adjustments and refinements, the model can be made more robust and capable of handling the complexities associated with veterinary ophthalmic images, which often include a wide range of conditions and variations. Together, these hypotheses set the foundation for a comprehensive investigation into the applicability of advanced computer vision techniques in veterinary medicine, aiming to develop tools that could revolutionise the way canine eye diseases are diagnosed and managed in clinical settings.
1.2. Purpose and Objectives
The purpose of this dissertation is to apply advanced computer vision techniques to improve the diagnosis of canine eye diseases. The research focuses on developing a novel dataset and deep learning models for disease recognition and segmentation in real-world conditions. The specific objectives, aligned with the expected scientific contributions, are:
1. An image dataset for machine learning of canine eye diseases: Creation of a publicly available, annotated dataset for training machine learning models on canine eye diseases.
2. Deep convolutional neural network model for recognition of canine eye clinical symptoms and diseases from still images in unconstrained environments: Development of a CNN-based model for identifying eye symptoms and diseases in images captured under varied conditions.
3.
Deep neural network based on U-Net for segmentation of canine eye clinical symptoms from still images in unconstrained environments: Implementation of a U-Net-based model for segmenting clinical symptoms in canine eye images. 4. An improved method for segmentation of canine eye conditions based on U-Net: Refinement of the U-Net model to improve segmentation accuracy for diagnosing canine eye conditions. 1.3. Brief Review of Previous Research Computer vision has made significant strides in human medical diagnostics, with successful applications in ophthalmology, such as the detection of glaucoma and retinal diseases using Convolutional Neural Networks (CNNs). However, the application of similar technologies in veterinary medicine, particularly for diagnosing canine eye diseases, is limited. Previous studies have demonstrated the potential of CNNs for disease recognition in controlled environments, but there remains a gap in their application under real-world, unconstrained conditions. This research aims to bridge that gap by developing a tailored dataset and a specialised U-Net model for veterinary use. 1.4. Scientific Methods The research employs several scientific methods: 1. Data Collection and Annotation: A custom dataset was created using images collected from veterinary clinics and annotated by experts. 2. Deep Learning Model Development: The U-Net architecture was modified and trained using transfer learning and data augmentation techniques to improve segmentation accuracy. 3. Model Evaluation: The performance of the model was assessed using metrics such as the Jaccard Index and Dice Similarity Coefficient, followed by statistical analysis using ANOVA and Tukey HSD tests. 4. Application Development: A web-based tool was developed and deployed using Docker to ensure portability and scalability. 2. 
LITERATURE REVIEW The application of computer vision techniques has significantly advanced the field of medicine, particularly in the diagnosis and analysis of various medical conditions. Deep Convolutional Neural Networks (CNNs), a class of deep learning models, have been widely used for tasks such as detecting glaucoma, diabetic retinopathy, and other retinal diseases,
providing high accuracy in ocular image analysis and disease classification [1], [2]. These advancements are attributed to CNNs’ ability to automatically extract and learn relevant features from ocular images, improving diagnostic accuracy compared to traditional manual methods [3], [4], [5]. The development of these CNN-based models has been supported by a variety of publicly available datasets, which have enabled the training and validation of models in human ophthalmology. Notable datasets include: the ORIGA-light dataset, focusing on optic nerve head segmentation and glaucoma assessment [6]; Drishti-GS, designed for glaucoma detection, containing annotated fundus images for training CNNs [7]; Retinal Fundus Image for Glaucoma Detection, with a focus on glaucoma diagnosis [8]; RIM-ONE, a large dataset for retinal image analysis, primarily used for glaucoma detection [9]; iBUG, which focuses on facial landmark detection but has also been used for eye-tracking and ocular disease analysis [10]; OpenEDS, a dataset that includes annotated eye images for eye-tracking and ocular disease applications [11]; UBIRIS, intended primarily for biometric purposes but also applied in ocular disease diagnosis [12]; and TEyeD, a dataset containing eye-tracking data, useful for studying eye movements and diagnosing conditions such as glaucoma [13]. These datasets have significantly contributed to the development of effective models for human ophthalmology. In contrast, the application of computer vision techniques in veterinary ophthalmology, particularly for canine eye diseases, is much less developed. Datasets and research in this field are scarce. Studies focusing on conditions such as canine glaucoma are limited [14], [15], [16], and the few available datasets are typically small and lack the diversity seen in human datasets.
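The scarcity of annotated veterinary data described above is one reason the methodology of this dissertation leans on data augmentation. As a minimal illustration (NumPy only; the function name and array shapes are illustrative, not taken from the dissertation), segmentation augmentation must apply identical geometric transforms to an image and its mask, while photometric changes touch the image alone:

```python
import numpy as np

def augment_pair(image, mask, rng):
    """Jointly augment an image and its segmentation mask.

    Geometric transforms (here, a horizontal flip) are applied to both
    arrays so pixel labels stay aligned; photometric jitter (brightness)
    is applied to the image only, since it does not move any pixels.
    """
    if rng.random() < 0.5:              # random horizontal flip
        image = image[:, ::-1]
        mask = mask[:, ::-1]
    factor = rng.uniform(0.8, 1.2)      # brightness jitter, image only
    image = np.clip(image.astype(np.float32) * factor, 0, 255).astype(np.uint8)
    return image, mask

# Tiny demonstration: a 4x4 grey image with a single labelled pixel.
rng = np.random.default_rng(seed=0)
img = np.full((4, 4, 3), 128, dtype=np.uint8)
msk = np.zeros((4, 4), dtype=np.uint8)
msk[1, 2] = 1
aug_img, aug_msk = augment_pair(img, msk, rng)
assert aug_img.shape == img.shape and aug_msk.sum() == 1
```

In a real training pipeline this idea is extended with rotations, crops, and, as the dissertation describes, diffusion-generated synthetic images that enlarge the effective dataset.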
A notable study used CNNs to diagnose ulcerative keratitis in dogs, but it was constrained by a limited dataset acquired under controlled conditions [17]. This highlights the need for more extensive datasets and research into canine ophthalmology. The U-Net architecture, introduced for biomedical image segmentation, has become a popular choice for various medical imaging tasks due to its encoder-decoder structure, which enables precise feature extraction and localisation [18]. U-Net’s architecture is particularly effective in segmenting medical images where pixel-level accuracy is critical. It has demonstrated robust performance across multiple domains, including ophthalmology, where it has been applied to tasks such as optic disc segmentation, retinal layer segmentation, and detecting diseases like glaucoma and cataracts [19], [20], [21], [22]. Studies have consistently shown that U-Net performs well in medical image analysis, confirming its relevance in both human and veterinary medical contexts [18], [19], [20], [23], [24], [25]. Transformer-based models, such as the Swin Transformer [26] and SegFormer [27], have recently gained attention for their superior ability to capture global context, making them particularly effective in fine-grained segmentation tasks across human and veterinary fields. However, U-Net continues to be a formidable option, especially in situations with limited data availability, where it still performs robustly [28]. In fact, a comparison between U-Net and transformer-based architectures for medical image registration demonstrated that U-Net, with minimal adjustments, can surpass the performance of these newer transformer models [28]. One of the key strengths of U-Net is its adaptability, particularly when combined with transfer learning. Transfer learning allows models to leverage pre-trained weights from large datasets, enabling them to perform well even on smaller datasets typical in veterinary applications.
U-Net has been successfully combined with advanced CNN backbones like ResNet [29], EfficientNet [30], VGG [31], and Inception [32], significantly improving its feature extraction capabilities [4], [5]. These backbones help enhance the model’s ability to learn from limited data, making it well-suited for applications where obtaining large datasets is impractical. Furthermore, studies have shown that U-Net does not require an extensive dataset to achieve good results, especially when augmented with transfer learning techniques [33]. While U-Net can perform well with small datasets, the creation of synthetic datasets can further enhance model performance by augmenting the available training data. Synthetic data generation techniques, such as those using Generative Adversarial Networks (GANs) and diffusion models, have become valuable tools for augmenting datasets in various domains [34], [35]. Diffusion models, particularly Stable Diffusion, have shown promise in generating high-quality synthetic images by iteratively refining noisy images [36]. Stable Diffusion, which employs a U-Net-like architecture for image synthesis, can be used to create realistic synthetic images of diseased eyes, which can significantly improve the training of CNNs for canine ophthalmology [37], [38], [39]. These synthetic datasets could address the shortage of annotated data in veterinary ophthalmology and enhance model robustness [40]. A web-based application utilising a U-Net model trained on real and synthetic datasets could provide a valuable diagnostic tool for veterinarians. This application could assist in the early detection and treatment of canine eye diseases. In combination with image segmentation, the integration of Large Language Models (LLMs) could further enhance the diagnostic process. LLMs such as ChatGPT [41], Mistral [42], Gemini [43], Llama [44], and Claude [45] have demonstrated significant potential in medical data interpretation. 
These models, when integrated with image analysis tools, could help in interpreting symptoms, guiding veterinarians through diagnostic workflows, and improving decision-making. Evaluation metrics like BERTScore [46], CLIPScore [47], BLEU [48], METEOR [49], ROUGE [50], and SPICE [51] have shown that LLMs can effectively process complex medical information, making them suitable for integration into diagnostic applications. This research focuses on developing a mobile web application for diagnosing canine eye diseases by combining U-Net-based image segmentation with LLMs for interpreting medical symptoms. The U-Net model is trained on a custom dataset, DogEyeSeg4 [52], augmented with synthetic data generated using Stable Diffusion techniques. This study explores the integration of LLMs with medical image analysis, evaluating the potential of various LLMs such as ChatGPT, Mistral, Gemini, and others in improving diagnostic workflows. By combining these state-of-the-art technologies, this research aims to advance the field of veterinary ophthalmology, providing tools that can assist in the early detection and treatment of eye diseases in dogs.

3. METHODOLOGY

3.1. Dataset

In veterinary ophthalmology, especially in the niche of canine eye diseases, obtaining suitable datasets poses significant challenges.
Vetnuus | October 2025 11 Leading Article

Unlike human medical datasets, which benefit from more standardised and widely available sources, veterinary datasets are relatively scarce. The images needed for training models in this domain must capture a variety of conditions across multiple breeds, often under non-ideal circumstances. As a result, a custom dataset was developed to support this research, addressing the scarcity of available data. The following sections describe the DogEyeSeg4 dataset, constructed to overcome these challenges, as well as a synthetic dataset generated to augment the available real-world data.

3.1.1 DogEyeSeg4 Custom dataset

Given the limited availability of publicly accessible canine ophthalmic datasets, the development of the DogEyeSeg4 dataset became essential. The process of gathering suitable data from real-world clinical environments posed several difficulties. First, patient compliance during eye examinations often impacted image quality. Dogs, being non-cooperative subjects, frequently moved during assessments, resulting in blurry images or requiring multiple attempts to capture usable data. Second, the diversity of breeds presented additional variability in eye structure, fur colouration, and size, all of which affected the clarity and focus of the images. These factors, while reflective of real-world conditions, made it difficult to obtain the consistent, high-quality images necessary for robust model training. The images in the DogEyeSeg4 dataset were collected from two specialised veterinary ophthalmology clinics and a veterinary eye disease atlas [53]. Clinical environments, unlike controlled laboratory settings, introduce uncontrollable variables such as inconsistent lighting, non-standardised camera equipment, and varying angles of capture. These factors result in natural but challenging conditions for machine learning models.
The lighting in veterinary clinics often varies depending on the location of the examination, leading to different levels of contrast and exposure in the images. Additionally, images were captured without staged conditions, meaning the dataset includes a range of natural scenarios rather than perfectly lit or artificially enhanced images. This variety adds to the complexity but also enhances the dataset's applicability to real-world diagnostic conditions.

The DogEyeSeg4 dataset consists of 145 images, which include both close-up images of the canine eye and full headshots. Example images with corresponding masks are shown in Figure 1. This diversity is crucial because veterinary practitioners often capture images that show either the entire head of the animal or focus specifically on the affected eye, depending on the diagnostic requirement. Close-up images provide detailed views of specific conditions, such as corneal cloudiness or excessive tearing, while whole-head images are more common in clinical settings and may capture multiple symptoms, including redness of the sclera or coloured masses in the corner of the eye, from a broader perspective. Each image was resized to 320x320 pixels for consistency during model training, and the dataset is annotated with one-channel masks in PNG format. The four annotated classes, shown in Figure 2 [54], correspond to the following symptoms:
• S1: Cloudiness or haziness of the cornea,
• S2: Redness of the sclera,
• S3: Excessive tearing,
• S4: A coloured mass in the corner of the eye.
These symptoms correspond to several common diseases, such as Cherry Eye, Glaucoma, Uveitis, Corneal Ulceration, and Bacterial Keratitis. To ensure the clinical relevance and accuracy of the dataset, each image and its annotation were reviewed by a veterinary specialist. This review process was critical in maintaining the diagnostic precision of the dataset, especially given the variability in patient behaviour and the non-standardised clinical conditions under which the images were captured.

Figure 1. Example images in the DogEyeSeg4 dataset with corresponding masks, showing a close-up of an eye (upper row) and the whole head (lower row)
Figure 2. Visual representation of medical symptoms, in top-left to bottom-right order: S1 - cloudiness or haziness of the cornea; S2 - sclera redness; S3 - excessive tears; and S4 - coloured mass protrusion in the corner of the eye

Ensuring the dataset complied with data protection regulations, particularly the General Data Protection Regulation (GDPR) [55], was a critical part of the dataset creation process. All images included in the dataset were anonymised to safeguard the privacy of clients and their animals. Anonymisation involved the removal of any identifiable information, such as examination dates, client names, and animal identifiers. By adhering to GDPR guidelines, the dataset was rendered compliant with strict privacy regulations. Additionally, since the images were gathered during standard veterinary care and no animals were harmed for the purpose of image acquisition, ethical approval was not required for this study. This was made explicit in the ethics and consent statement accompanying the dataset: "Images were collected as part of routine clinical evaluations, and no ethical approval was necessary as no harm was caused to the animals."

One of the key challenges in assembling this dataset was balancing the representation of each class. For example, some symptoms, such as excessive tearing (S3), appeared more frequently than others, such as the coloured mass (S4). Without careful selection, this imbalance could lead to biased model training, where the model becomes overly tuned to detecting the more frequent symptoms. Therefore, a meticulous curation process was employed to ensure a balanced distribution across all four classes. To further enhance the dataset's utility, data augmentation techniques were applied. Building on methodologies from medical image segmentation, particularly with respect to the U-Net architecture, augmentation methods such as horizontal flipping, rotation by up to 15 degrees, and translation by 50 pixels were used, as shown in Figure 3 [56]. These augmentations introduce variation in camera angles and positions, simulating different real-world scenarios where veterinary professionals might capture images from slightly different perspectives. Importantly, zoom augmentation was avoided to prevent interpolation, which could introduce noise into the masks and degrade the accuracy of the annotations. The augmentation process expanded the dataset to 200 images, increasing its robustness and making it more suitable for training machine learning models that need to generalise across diverse imaging conditions.
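A key detail of segmentation augmentation is that each geometric transform must be applied identically to the image and its one-channel mask. The following numpy sketch illustrates flip and shift on a toy pair; rotation, which in practice is done with a library routine, is omitted here, and zero-filling the vacated pixels assumes class 0 is background. All names and sizes are illustrative.

```python
import numpy as np

def flip_pair(img, mask):
    """Horizontal flip applied identically to image and mask."""
    return np.flip(img, axis=1), np.flip(mask, axis=1)

def shift_pair(img, mask, dx):
    """Horizontal translation by dx pixels; vacated columns are zero-filled.

    np.roll alone would wrap pixels around the edge, so the wrapped
    columns are cleared afterwards (assumed background class 0).
    """
    out_i, out_m = np.roll(img, dx, axis=1), np.roll(mask, dx, axis=1)
    if dx > 0:
        out_i[:, :dx], out_m[:, :dx] = 0, 0
    elif dx < 0:
        out_i[:, dx:], out_m[:, dx:] = 0, 0
    return out_i, out_m

img = np.arange(16).reshape(4, 4)        # toy 4x4 "image"
mask = (img % 2).astype(np.uint8)        # toy one-channel mask
fi, fm = flip_pair(img, mask)
si, sm = shift_pair(img, mask, 2)
```

Because flips and integer-pixel shifts involve no interpolation, the mask labels stay exact, which is the same reason the study avoided zoom augmentation.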
3.1.2 Synthetic datasets

Due to the limited availability of real-world data and the need for large datasets to train deep learning models, synthetic image generation has become a valuable resource in medical imaging, including veterinary ophthalmology. Several techniques exist for generating synthetic images, each offering strengths and weaknesses in tackling this issue. These methods include Generative Adversarial Networks (GANs) [57], Variational Autoencoders (VAEs) [58], and Diffusion Models [59]. Additionally, procedural methods like rule-based systems and 3D rendering techniques are also applied for synthetic image generation, offering high control over image features, albeit with increased manual intervention and resource demand [60].

GANs utilise a two-part system, where the generator creates images and the discriminator attempts to distinguish them from real images. While GANs are capable of producing high-resolution, realistic images, they are computationally expensive and often unstable during training, with issues like "mode collapse," where limited variations of images are generated. VAEs, by contrast, are probabilistic models that encode input data into a latent space and then decode it back, generating new data by sampling from this latent space. VAEs are more stable to train than GANs but tend to produce lower-resolution images that may lack the detailed features required in tasks such as ophthalmic disease diagnosis. Diffusion models, particularly the Stable Diffusion variant used in this study, offer a more balanced approach, combining computational efficiency with high-quality image generation. A significant feature of diffusion models is their reliance on the U-Net architecture. In diffusion models, the U-Net serves as the backbone for the denoising process, transforming random noise into coherent images. This structure makes diffusion models particularly well-suited for generating structured medical images [61].
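The denoising mechanics just described can be sketched in a few lines of numpy. This is a toy illustration of the noising identity used by diffusion models, with the learned U-Net denoiser replaced by the true noise (so reconstruction is exact by construction); the schedule values are illustrative, not those of Stable Diffusion.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 100
betas = np.linspace(1e-4, 0.02, T)      # toy variance schedule
alphas_bar = np.cumprod(1.0 - betas)    # cumulative signal-retention factor

x0 = rng.standard_normal((8, 8))        # stand-in for a clean image
t = 60
eps = rng.standard_normal(x0.shape)

# Forward (noising) process: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps
x_t = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1 - alphas_bar[t]) * eps

# Idealised reverse step: with a perfect noise prediction eps_hat == eps,
# the clean image is recovered exactly. In Stable Diffusion, a U-Net
# predicts eps_hat from (x_t, t) instead of being handed the true noise.
x0_hat = (x_t - np.sqrt(1 - alphas_bar[t]) * eps) / np.sqrt(alphas_bar[t])

print(np.allclose(x0_hat, x0))  # True
```

Training a diffusion model amounts to teaching the U-Net to approximate `eps` from the noisy input, after which the same inversion generates new images from pure noise.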
Beyond learning-based techniques, rule-based image generation and 3D rendering are also used in certain fields for procedural image creation [60]. Rule-based systems apply predefined algorithms to generate images, offering high customisation but requiring extensive manual intervention. Rendering techniques simulate 3D environments and lighting to create highly detailed images, but these methods demand considerable setup and computational resources [62].

Figure 3. Example of applied augmentations, with the original image in the top row: horizontal flip (middle row, left), horizontal shift (middle row, right), rotation (bottom row, left), and vertical shift (bottom row, right)
Figure 4. Stable Diffusion-generated images with a fixed posture across different dog breeds. The images show a high degree of realism, with occasional artefacts such as a double set of ears
For this study, Stable Diffusion was selected due to its balance between image quality and computational efficiency. Example images generated using Stable Diffusion can be seen in Figure 4 [63]. Diffusion models, particularly those built with the U-Net architecture, offer significant advantages in terms of structural detail and precision, which are crucial in medical applications like ophthalmology. The U-Net backbone, with its skip connections, helps preserve the fine-grained information necessary to capture subtle disease symptoms in canine eye images [18]. Stable Diffusion was further enhanced using Low-Rank Adaptation (LoRA), which allowed for parameter-efficient fine-tuning of the model, reducing both time and computational resources [64]. By using a small subset of real-world images from the DogEyeSeg4 dataset, Stable Diffusion was fine-tuned to specialise in generating images that accurately depict canine ophthalmic diseases like Glaucoma, Cherry Eye, and Uveitis. Examples of such images, generated using LoRA, are presented in Figure 5 [63]. While rule-based systems and 3D rendering methods provide high control over specific image features, their high manual effort and resource demands made them less practical for large-scale image generation compared to Stable Diffusion [62]. In diffusion models, image generation involves gradually transforming random noise into a coherent image. This denoising process is managed by a U-Net architecture, which effectively captures both high-level structures and fine details. The model is trained to reverse the noise-adding process, allowing it to generate clean, high-quality images from noisy input during inference [59], [61]. In this study, the fine-tuning of Stable Diffusion was performed using a custom LoRA, which reduced memory requirements by updating only a subset of the model's parameters [65].
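The parameter saving behind LoRA comes from replacing a full weight update with a low-rank product, ΔW = B·A, so only r·(d+k) numbers are trained instead of d·k. A minimal numpy sketch follows; the shapes, rank, and scaling factor are illustrative, not the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, r = 512, 512, 8                 # frozen layer is d x k; LoRA rank r

W = rng.standard_normal((d, k))       # frozen pre-trained weight, never updated
A = rng.standard_normal((r, k)) * 0.01
B = np.zeros((d, r))                  # B starts at zero, so delta W = B @ A = 0

alpha = 16.0                          # LoRA scaling factor (illustrative)

def forward(x):
    # Effective weight is W + (alpha / r) * B @ A; only A and B are trainable.
    return x @ (W + (alpha / r) * B @ A).T

x = rng.standard_normal((2, k))
y0 = forward(x)                       # at initialisation this equals x @ W.T

full = d * k                          # parameters in a full fine-tune
lora = r * (d + k)                    # parameters trained by LoRA
print(f"trainable params: {lora} vs full {full}")  # 8192 vs 262144
```

Because `B` is initialised to zero, the adapted model starts out identical to the pre-trained one, and fine-tuning only has to learn the low-rank correction, which is where the memory savings reported in the study come from.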
The use of a U-Net backbone allowed the model to capture the intricate details of various eye diseases, making it particularly effective for generating clinically relevant images of canine ophthalmic conditions. One of the key features of Stable Diffusion in this study was its ability to add disease symptoms to otherwise healthy eyes using inpainting [66], [67]. Inpainting allows for localised changes to specific regions of an image while leaving the rest of the image unchanged. This feature was valuable for introducing symptoms like scleral redness or corneal cloudiness into healthy eye images, as shown in Figure 6 [63]. Using prompts such as "cloudy cornea with red sclera," the model could generate localised disease manifestations, allowing for the creation of synthetic images that accurately represented various stages of disease progression. This capability enhanced the diversity of the dataset, ensuring that a wide range of symptoms and severities were represented. In addition to generating new images, diffusion models offer unique opportunities for augmentation:
• Symptom Severity Modification: Adjusting the text prompts allowed for the generation of images with varying degrees of disease severity, from mild to severe symptoms [38].
• Symptom Combination: Diffusion models can create images with multiple symptoms, such as excessive tearing and a coloured mass, replicating complex real-world cases.
• Stochastic Variability: The inherent randomness in diffusion models ensures that even with the same prompt, slight variations occur in the generated images, further diversifying the dataset without requiring additional real-world data collection [37].

Advantages of using synthetic images:
• Scalability of Data: Synthetic image generation facilitates the creation of extensive datasets, which are particularly valuable for rare conditions or underrepresented canine breeds.
This approach addresses the scarcity of real-world data, allowing researchers to simulate a wide range of clinical scenarios that might otherwise be difficult to capture [68].
• Efficiency in Cost and Time: Compared to the collection and annotation of real-world images, synthetic data can be generated rapidly and at a significantly lower cost. This enables the efficient scaling of datasets required for training deep learning models, reducing the time and resources associated with manual data acquisition [69].

Figure 5. Close-up Stable Diffusion images generated using the custom LoRA, depicting various medical conditions based on the DogEyeSeg4 dataset
Figure 6. Stable Diffusion-generated images with symptoms added using inpainting, showing (a) healthy eyes and the left eye with (b) a prolapsed eyelid gland, (c) a red sclera, (d) a cloudy cornea, (e) epiphora, and (f) all previously mentioned symptoms
• Controlled Variability: Synthetic generation allows researchers to precisely control the inclusion of specific symptoms, conditions, and their severity. This level of control ensures that the dataset remains balanced, mitigating issues related to class imbalance, and comprehensively covering the spectrum of disease presentations [69].
• Ethical Benefits: The use of synthetic images circumvents the need for invasive clinical procedures or additional veterinary visits. As a result, it provides an ethically sound method for expanding datasets without subjecting animals to unnecessary tests or discomfort [40].

Limitations when synthetic images are used:
• Realism Constraints: Although synthetic image generation has made significant advancements, the resulting images may still exhibit subtle artefacts or unrealistic features. These imperfections could lead to a degradation in model performance when applied to real-world scenarios, as the models may struggle to generalise from synthetic to actual clinical data [39].
• Bias Propagation: Synthetic datasets, while generated artificially, can inadvertently carry over biases from the real-world data used in the fine-tuning process. This issue may limit the generalisability of models trained exclusively on synthetic data, as the diversity and complexity of real-world cases might not be fully captured [36].
• Necessity for Clinical Validation: Despite their utility, synthetic images require thorough validation to ensure their clinical relevance. Without rigorous validation processes, models trained on synthetic data might underperform when deployed in real-world clinical settings, especially when tasked with recognising nuanced or rare conditions [40].

3.2 Model Architecture and Training

3.2.1 U-Net

The U-Net architecture is widely regarded as a robust model for image segmentation, particularly in the field of biomedical image analysis.
U-Net excels at pixel-wise classification tasks. Its architecture consists of two symmetric parts: an encoder (contracting path) and a decoder (expanding path), forming a U-shaped structure that allows the network both to extract features and to reconstruct spatial details at the pixel level. The encoder serves to downsample the input image through a series of convolutional and max-pooling layers, extracting increasingly abstract features. Each convolutional block consists of two 3x3 convolutional layers, each followed by a ReLU activation function. Downsampling occurs through max-pooling layers, which reduce the spatial resolution by a factor of two at each step. This progression allows the U-Net to capture local features in the early layers and more complex global patterns in the deeper layers. A visual representation of the U-Net architecture is shown in Figure 7 [56].

At the deepest point in the architecture, the bridge links the encoder and decoder by combining features learned from the encoder's deepest layer with features from the decoder's shallowest layer. This design ensures that global and local features are propagated through the network, facilitating accurate segmentation even in complex images with small or subtle regions of interest [70]. The combination of these feature maps is essential for generating detailed and contextually accurate segmentations, particularly in medical applications where high precision is critical. The decoder mirrors the encoder, progressively upsampling the feature maps and restoring the spatial resolution of the input image. The decoder combines the upsampled feature maps with the feature maps from the corresponding encoder layers via skip connections. These skip connections help retain the spatial context and fine details from the encoder, ensuring that the segmentation is accurate at the pixel level [18], [71].

Figure 7. The standard U-Net architecture, where the left (encoder) part and right (decoder) part form the letter U. The encoder captures detailed features from the input images, which are then upsampled by the decoder. The decoder uses transposed convolutions to restore the image to its original size, using skip connections from the corresponding encoder layers to refine the segmentation output with high-resolution details retained [56]
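For the 320x320 inputs used in this study, the encoder's repeated halving of spatial resolution can be traced with a small helper. A depth of 4 pooling steps, as in the classic U-Net, is assumed here for illustration.

```python
def encoder_sizes(size: int, depth: int) -> list:
    """Spatial side length after each 2x2 max-pool in the U-Net encoder."""
    sizes = [size]
    for _ in range(depth):
        size //= 2          # each max-pool halves height and width
        sizes.append(size)
    return sizes

print(encoder_sizes(320, 4))  # [320, 160, 80, 40, 20]
```

The decoder then reverses this sequence (20 back up to 320) via transposed convolutions, with the skip connection at each level pairing encoder and decoder feature maps of the same spatial size.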